Introduction


In the 4th quarter of a 2024 Week 12 game between the Baltimore Ravens and Los Angeles Chargers, the Chargers had the ball on 3rd and 5 at their own 42 yard line, down 30-16 with 2:53 left on the game clock. The Ravens rushed 4, dropping 7 defenders into coverage, and the play resulted in a Justin Herbert sack and a loss of 4 yards. While this sounds like an extremely basic play, watching the film tells a completely different story:


We see from the film that the Ravens start the play with 3 down linemen, a linebacker walked up in the left B-gap, and an edge defender lined up outside the right tackle. Based on that formation, if we knew before the snap that the Ravens would rush 4 and drop 7 into coverage, we'd expect those 4 pass rushers to be some combination of these 5 players up around the line of scrimmage. In actuality, two of the three down linemen, including 2023 second-team All-Pro Nnamdi Madubuike, plus the B-gap linebacker, drop into coverage. In their place, former All-Pro cornerback Marlon Humphrey and safety Brandon Stephens, who eventually recorded the sack, join the pass rush. The result is mass confusion for the Chargers offense and a big-time sack at a pivotal moment in the game.

Plays like these happen around the NFL. Defensive play callers like Minnesota's Brian Flores, Seattle's Mike Macdonald, the Chargers' Jesse Minter, and Zach Orr of the aforementioned Baltimore Ravens have built their defensive identities around generating as much confusion as possible for offenses, and turning that confusion into big-time plays for the defense. Intuitively, this makes sense: the easier it is to predict what the defense is going to do, the easier it should be to execute a plan to defeat them, and the inverse holds as the defense becomes more unpredictable. Here, we will attempt to find a way not only to measure the level of confusion that defenses can generate by being unpredictable with the pass rush, but also to capture the negative effect that unpredictability has on the offense.

To quantify unpredictability - specifically pass rush unpredictability - we first need to create a model that predicts the pass rush. We can't know which actions are unpredictable, or how unpredictable they are, without first having a baseline for what is expected. Coaches, quarterbacks, and offensive linemen take into account which players line up where, what direction they're facing, and how fast they're moving, and try to predict which individual players will be part of the pass rush. Our model will use the proprietary NFL player tracking data to do just that: find the probability that each individual defender will rush the passer based on their location, direction, speed, position, and more. Then, using the results of that model, we will build two new statistics: Unexpected Index, which quantifies the magnitude of unpredictability for a player on a particular play, and Unexpected Rate, the rate at which players do the opposite of their predicted action.



Data Cleaning


Before we can construct our pass rush prediction model and engineer our new metrics, we need to ensure that our data is in the proper format. Our initial dataset comprises all plays from all games during weeks 1-9 of the 2022 NFL season. The full data cleaning process can be found here, but we'll outline the general steps below.

First and foremost, since we're building a pass rush model, we only want dropback plays. Notice the difference between saying we want passing plays versus saying we want dropback plays - including all dropbacks ensures that we capture plays like quarterback scrambles and sacks, or any other dropbacks that don't actually result in a pass attempt. However, we want to exclude plays with either a QB spike or a QB kneel, since those plays won't have any legitimate pass rush to predict, and they could affect how the model works.

Also, for the sake of simplicity within the model, we will not be taking into account the positions of any of the offensive players on a particular play. While certain models aimed at the same goal would account for the position of all 22 players on the field at all times for each prediction, we will only use the data for one player per prediction. This not only keeps our model as simple as possible, but still yields a statistically significant outcome, as we will see later in our Results section. Along these same lines, we will also remove any plays where an offensive player is playing defense, since the player's position will be accounted for by our model.

Next, and perhaps most importantly, we need to standardize our tracking data. In its current format, the tracking data has the (x,y) coordinates of each player, and the ball, every 0.1 seconds of real time. The x coordinate corresponds to the yard line on the field, with zero being the back of the left end zone, and the y coordinate corresponds to where the player/ball is horizontally on the field, with zero being the home sideline. We also have each player's direction (angle of player motion) and orientation (where the player is facing), both measured in degrees:

If we were to build our model with the data in its current format, we would need a set of variables for the ball along with the tracking data for the player. Through the standardization process, however, we make the ball the coordinate point (0,0), so that the (x,y) coordinates of the player tell us how far away from the ball they are, measured in yards, removing a layer of complexity from our model. Furthermore, in the standardization process, we make the x-coordinate of the player always positive, further simplifying the data within the model. Of course, making all the x-coordinates positive also means rotating the y-coordinate, direction, and orientation for all plays where the x-coordinate would otherwise be negative.

Finally, we need to consider the time frame from which we want data. Using the tracking data, we can narrow the window to all tracked points between "line set", the point at which the offensive players initially come set, and the moment the ball is snapped. Once we have these times singled out, we can convert the raw time into a variable that tracks how much time remains until the snap.
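As a sketch, the standardization and snap-relative timing steps described above might look like the following. The column names (`x`, `y`, `dir`, `o`, `ball_x`, `ball_y`, `play_direction`, `event`, `frame_id`) are illustrative, not the actual tracking data schema:

```python
import pandas as pd

def standardize_tracking(df: pd.DataFrame) -> pd.DataFrame:
    """Re-express each tracking point relative to the ball.

    Column names are hypothetical stand-ins for the tracking data fields.
    """
    out = df.copy()
    # Translate so the ball sits at (0, 0); std_x/std_y are yards from the ball.
    out["std_x"] = out["x"] - out["ball_x"]
    out["std_y"] = out["y"] - out["ball_y"]
    # For plays moving left, rotate 180 degrees so the defender's x-coordinate
    # is always positive, flipping y, direction, and orientation along with it.
    left = out["play_direction"] == "left"
    out.loc[left, ["std_x", "std_y"]] *= -1
    out.loc[left, ["dir", "o"]] = (out.loc[left, ["dir", "o"]] + 180) % 360
    return out

def add_time_to_snap(frames: pd.DataFrame) -> pd.DataFrame:
    """Keep only the frames between 'line_set' and 'ball_snap' and add
    time_to_snap, the seconds remaining until the snap (frames are 10 Hz)."""
    set_frame = frames.loc[frames["event"] == "line_set", "frame_id"].iloc[0]
    snap_frame = frames.loc[frames["event"] == "ball_snap", "frame_id"].iloc[0]
    out = frames[(frames["frame_id"] >= set_frame)
                 & (frames["frame_id"] <= snap_frame)].copy()
    out["time_to_snap"] = (snap_frame - out["frame_id"]) / 10.0
    return out
```

Note that the rotation for left-moving plays is what guarantees the standardized x-coordinate stays positive, as described above.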



Model Selection


Now that we have our data set up in the proper format, we can move on to building our model. Since our goal is to predict the binary outcome of whether or not each individual player will be rushing the passer, we will build a logistic regression model, with the response variable being whether or not the player is part of the initial pass rush. The output of our model will be the probability that a defender will be a pass rusher, which we then convert into a binary 'yes' or 'no' outcome; for any probability greater than 0.5, we will say that we expect that player to be a pass rusher.
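A minimal sketch of this setup, using scikit-learn and hypothetical standardized feature column names (the real model also includes player position and game context variables):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative feature names; the actual feature set is described in the text.
FEATURES = ["std_x", "std_y", "std_dir", "std_o", "speed", "accel", "time_to_snap"]

def fit_rush_model(train: pd.DataFrame) -> LogisticRegression:
    """Fit a logistic regression for the binary outcome: is this player
    part of the initial pass rush?"""
    model = LogisticRegression(max_iter=1000)
    model.fit(train[FEATURES], train["is_rusher"])
    return model

def rush_probability(model: LogisticRegression, rows: pd.DataFrame) -> np.ndarray:
    """Return P(pass rusher) for each row; a value above 0.5 becomes a
    binary 'yes' prediction."""
    return model.predict_proba(rows[FEATURES])[:, 1]
```

The 0.5 threshold is then just `rush_probability(model, rows) > 0.5`.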

While fitting our models, we will use a rotating train-test split so that we can find a prediction for every play of each game in the available dataset: to fit predictions for Week 1, we train the model on weeks 2-9; for Week 2, we fit the model using the data for weeks 1 and 3-9; and so on. There are multiple ways we can go about building this model, but we will focus on three possible methods:


Method One

The first modeling method is our most complex one. For this strategy, we fit the model to find a probability that a player will rush the passer for every 0.1 seconds between the offensive line set and the moment the ball is snapped, and then average those probabilities to serve as our point estimate. We will use the standardized (x,y) coordinates, the standardized direction and orientation, the speed and acceleration of the player at that point in time, the distance traveled since the last point in time, and the player's position, along with important game context variables like quarter, down and distance, game clock, and yards from the end zone:

Step One

  • Fit model for each 0.10 second based on all relevant tracking and game context variables
| Week | Team | Player Name | Standard X | Standard Y | Standard Direction | Standard Orientation | Time to Snap | Was Pass Rusher? | Pass Rush Probability |
|---|---|---|---|---|---|---|---|---|---|
| 1 | DEN | Randy Gregory | 1.620 | 5.86 | 3.58 | 205.82 | 1.000 | Yes | 0.706 |
| 1 | DEN | Randy Gregory | 1.630 | 5.86 | 73.50 | 207.45 | 0.900 | Yes | 0.724 |
| 1 | DEN | Randy Gregory | 1.640 | 5.86 | 93.34 | 208.87 | 0.800 | Yes | 0.718 |
| 1 | DEN | Randy Gregory | 1.670 | 5.86 | 95.60 | 208.87 | 0.700 | Yes | 0.716 |
| 1 | DEN | Randy Gregory | 1.690 | 5.86 | 98.05 | 207.96 | 0.600 | Yes | 0.716 |
| 1 | DEN | Randy Gregory | 1.720 | 5.85 | 101.29 | 202.84 | 0.500 | Yes | 0.710 |
| 1 | DEN | Randy Gregory | 1.730 | 5.85 | 100.96 | 204.70 | 0.400 | Yes | 0.700 |
| 1 | DEN | Randy Gregory | 1.720 | 5.85 | 144.45 | 210.17 | 0.300 | Yes | 0.719 |
| 1 | DEN | Randy Gregory | 1.710 | 5.84 | 233.89 | 210.17 | 0.200 | Yes | 0.737 |
| 1 | DEN | Randy Gregory | 1.660 | 5.84 | 268.65 | 213.36 | 0.100 | Yes | 0.761 |

Step Two

  • Take an average of the individual probabilities to create a point estimate

  • Use the point estimate from the previous step to create the classifier

| Week | Team | Player Name | Was Pass Rusher? | Average Pass Rush Probability | Classifier |
|---|---|---|---|---|---|
| 1 | DEN | Randy Gregory | Yes | 0.742 | 1 |

This strategy allows us to track how an individual player's pass rush probability changes as they move through space and time, while still giving us a point estimate to use to test our error rates and eventually build our metric.
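Step Two of this method is a simple group-and-average. A sketch, assuming a table of per-frame rows with a fitted `rush_prob` column (column names hypothetical):

```python
import pandas as pd

def frame_probs_to_classifier(frames: pd.DataFrame) -> pd.DataFrame:
    """Method One, Step Two: collapse the per-frame pass rush
    probabilities (one row per 0.1s frame) into a single point estimate
    per player-play, then threshold at 0.5 to get the classifier."""
    est = (frames
           .groupby(["game_id", "play_id", "nfl_id"], as_index=False)
           .agg(avg_prob=("rush_prob", "mean")))
    est["classifier"] = (est["avg_prob"] > 0.5).astype(int)
    return est
```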


Method Two

The second method is somewhat similar to the first. This time, however, instead of fitting a probability for each individual point in time and then taking the average, we fit one probability based on the player's average tracking data points - i.e. their average standardized x and y coordinates, average direction and orientation, and average acceleration, speed, and distance traveled. Like Method One, this modeling strategy attempts to account for how a player moves through space and time while still giving us a point estimate:

| Week | Team | Player Name | Average X | Average Y | Average Direction | Average Orientation | Average Speed | Average Distance | Average Acceleration | Was Pass Rusher? | Pass Rush Probability | Classifier |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | DEN | Randy Gregory | 1.647 | 5.837 | 184.948 | 216.136 | 0.120 | 0.013 | 0.231 | Yes | 0.730 | 1 |


Method Three

The third modeling strategy we will try uses a fixed point in time as the basis for our predictions. Specifically, we will use a player's standardized (x,y) coordinates, standardized direction and orientation, and the other ancillary metrics at the fixed point of one second before the snap.

While this point in time is an arbitrary choice, it does make some intuitive sense: one second before the snap is early enough that the quarterback and offensive linemen can get one last good look at where all the defenders are, but close enough to the snap that the defense has no time to make any meaningful changes to their tracked position, and any changes they do make in that last second are unlikely to be noticed by the offensive players in time to meaningfully change their expectations of what the defenders will do.


We will decide which version of the model we will use by selecting the model that has the lowest error rate. In the context of this scenario, that means we will use the model that has the fewest incorrect predictions about which players will be involved with the pass rush based on our Classifier variable:

Pass Rush Prediction Model Selection
Error Rate by Modeling Method

|  | Method 1 | Method 2 | Method 3 |
|---|---|---|---|
| Error Rate | 0.0528 | 0.0531 | 0.0541 |

As we can see from comparing the error rates, all three modeling methods perform similarly, but the model we will use for building our predictions and new metrics is Method One: the one that fits a pass rush probability for each player every 0.1 seconds between the offensive formation initially coming set and the ball being snapped, and then takes the average probability to create our final prediction.



Feature Engineering: Unexpected Index and Unexpected Rate


Now that we have our model for fitting pass rush probabilities, we can find a probability for both the expected and unexpected action, whether that action is rushing the passer or dropping into coverage:

| Week | Team | Player Name | Was Pass Rusher? | Average Pass Rush Probability | Classifier | Probability of Expected Action | Probability of Actual Action |
|---|---|---|---|---|---|---|---|
| 1 | BUF | Von Miller | No | 0.848 | 1 | 0.848 | 0.152 |
| 1 | BUF | Micah Hyde | No | 0.034 | 0 | 0.966 | 0.966 |
| 1 | BUF | Jordan Poyer | No | 0.003 | 0 | 0.997 | 0.997 |
| 1 | BUF | Jordan Phillips | No | 0.995 | 1 | 0.995 | 0.005 |
| 1 | BUF | Matt Milano | Yes | 0.428 | 0 | 0.572 | 0.428 |
| 1 | BUF | Tremaine Edmunds | No | 0.515 | 1 | 0.515 | 0.485 |
| 1 | BUF | Taron Johnson | No | 0.014 | 0 | 0.986 | 0.986 |
| 1 | BUF | Tim Settle | Yes | 0.996 | 1 | 0.996 | 0.996 |
| 1 | BUF | Dane Jackson | No | 0.056 | 0 | 0.944 | 0.944 |
| 1 | BUF | Gregory Rousseau | Yes | 0.981 | 1 | 0.981 | 0.981 |
| 1 | BUF | Christian Benford | No | 0.019 | 0 | 0.981 | 0.981 |

From these probabilities, we can derive the equation for a new metric: Unexpected Index:

\[ Unx_{player}\ =\ P(Expected\ Action) - P(Actual\ Action) \]

This Unexpected Index is a way of measuring the magnitude of how unexpected the action of a particular player is. If a player’s actual action (rush or coverage) matches their expected action, their Unexpected Index will be 0.

To find the Unexpected Index for a full play, we take the sum of the individual player indices:

\[ Unx_{play}\ =\ \sum_{players} \left( P(Expected\ Action) - P(Actual\ Action) \right) \]

If every player does their expected action, then the Unexpected Index for the play will be 0. If every one of the 11 defenders had an over 99% probability of either rushing or covering, but then did the opposite, the Unexpected Index would approach 11. This gives us a lower bound of 0 and an upper bound of 11 for our statistic.

We can also use this player level Unexpected Index to create Unexpected Rate. Unexpected Rate is simply the percentage of players who have a positive Unexpected Index on a play:

\[ Unx\% \ = \ \frac{\sum_{players} \mathbf{1}\left\{ Unx_{player} > 0 \right\}}{11} \]

Since Unexpected Rate is represented as a rate/percentage, it will always be between 0 and 1.
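Both metrics can be computed directly from each defender's average pass rush probability and actual action. A sketch, with hypothetical column names `rush_prob` and `rushed`:

```python
import numpy as np
import pandas as pd

def unexpected_metrics(play: pd.DataFrame) -> tuple[float, float]:
    """Play-level Unexpected Index and Unexpected Rate, given one row
    per defender with their average pass rush probability and whether
    they actually rushed."""
    p = play["rush_prob"].to_numpy()
    rushed = play["rushed"].to_numpy()
    # The expected action is whichever outcome the model deems more likely,
    # so P(expected action) is always at least 0.5.
    p_expected = np.where(p > 0.5, p, 1 - p)
    p_actual = np.where(rushed, p, 1 - p)
    unx_player = p_expected - p_actual            # 0 when action matches expectation
    unx = float(unx_player.sum())                 # play-level Unexpected Index
    unx_rate = float((unx_player > 0).sum()) / 11 # 11 defenders on the field
    return unx, unx_rate
```

For example, the Von Miller and Matt Milano rows from the table above contribute 0.696 and 0.144 respectively, and every matching row contributes 0.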

This is how those equations can be used to calculate Unexpected Index and Unexpected Rate for the individual players:

| Week | Team | Player Name | Was Pass Rusher? | Average Pass Rush Probability | Classifier | Probability of Expected Action | Probability of Actual Action | Unexpected Index | Unexpected Rate |
|---|---|---|---|---|---|---|---|---|---|
| 1 | BUF | Von Miller | No | 0.848 | 1 | 0.848 | 0.152 | 0.696 | 1.000 |
| 1 | BUF | Micah Hyde | No | 0.034 | 0 | 0.966 | 0.966 | 0.000 | 0.000 |
| 1 | BUF | Jordan Poyer | No | 0.003 | 0 | 0.997 | 0.997 | 0.000 | 0.000 |
| 1 | BUF | Jordan Phillips | No | 0.995 | 1 | 0.995 | 0.005 | 0.990 | 1.000 |
| 1 | BUF | Matt Milano | Yes | 0.428 | 0 | 0.572 | 0.428 | 0.144 | 1.000 |
| 1 | BUF | Tremaine Edmunds | No | 0.515 | 1 | 0.515 | 0.485 | 0.030 | 1.000 |
| 1 | BUF | Taron Johnson | No | 0.014 | 0 | 0.986 | 0.986 | 0.000 | 0.000 |
| 1 | BUF | Tim Settle | Yes | 0.996 | 1 | 0.996 | 0.996 | 0.000 | 0.000 |
| 1 | BUF | Dane Jackson | No | 0.056 | 0 | 0.944 | 0.944 | 0.000 | 0.000 |
| 1 | BUF | Gregory Rousseau | Yes | 0.981 | 1 | 0.981 | 0.981 | 0.000 | 0.000 |
| 1 | BUF | Christian Benford | No | 0.019 | 0 | 0.981 | 0.981 | 0.000 | 0.000 |

Using these same equations, here are the top 10 plays from weeks 1-9 of the 2022 NFL season by Unexpected Index and by Unexpected Rate:

Top 10 Most Unexpected Plays
By Unexpected Index, 2022 Weeks 1-9

| Team | Week | playId | Unexpected Index | Unexpected Rate | EPA |
|---|---|---|---|---|---|
| PIT | 1 | 4434 | 3.804 | 0.364 | 1.650 |
| BAL | 1 | 1879 | 3.611 | 0.455 | −0.118 |
| ARI | 5 | 2513 | 3.359 | 0.455 | −0.355 |
| BUF | 5 | 3384 | 3.357 | 0.364 | −1.191 |
| DAL | 2 | 3331 | 3.287 | 0.364 | 1.800 |
| TEN | 4 | 141 | 3.258 | 0.455 | 2.997 |
| DAL | 5 | 3375 | 3.245 | 0.364 | −1.183 |
| BAL | 5 | 3452 | 3.235 | 0.364 | 0.554 |
| NYG | 3 | 2683 | 3.181 | 0.364 | 1.907 |
| NE | 6 | 3136 | 3.180 | 0.455 | −2.243 |

Top 10 Most Unexpected Plays
By Unexpected Rate, 2022 Weeks 1-9

| Team | Week | playId | Unexpected Index | Unexpected Rate | EPA |
|---|---|---|---|---|---|
| BAL | 1 | 1879 | 3.611 | 0.455 | −0.118 |
| ARI | 5 | 2513 | 3.359 | 0.455 | −0.355 |
| TEN | 4 | 141 | 3.258 | 0.455 | 2.997 |
| NE | 6 | 3136 | 3.180 | 0.455 | −2.243 |
| NYG | 7 | 3234 | 3.035 | 0.455 | 2.661 |
| NYG | 2 | 3706 | 2.700 | 0.455 | −2.145 |
| ARI | 1 | 3492 | 2.696 | 0.455 | 2.772 |
| NYG | 3 | 3176 | 2.658 | 0.455 | 3.030 |
| NYG | 5 | 1863 | 2.450 | 0.455 | −0.286 |
| ATL | 5 | 504 | 2.395 | 0.455 | 1.124 |


Results: Why It Matters


Now that we have our new metrics, we can use them to see how defenses can leverage unpredictability to affect offenses. We will measure the negative effects on the offense using Expected Points Added (EPA) and an EPA-based Success Rate, where a successful play is one with positive EPA.

If we look at the relationship between Unexpected Rate and both EPA and Success Rate, we see that as Unexpected Rate goes up, both EPA and Success Rate have noticeable decreases:

However, it's difficult to draw any meaningful conclusions for Unexpected Rates of 0.273 (3 players doing the unexpected action) or greater, since there are so few plays with such high Unexpected Rates. So, to help alleviate the sample size issues, we can look at all plays with an Unexpected Rate of 0.182 or higher, meaning at least two players doing the unexpected action:

What these graphs show us is that defenses go from allowing -0.019 EPA per Play on plays with an Unexpected Rate of 0 to allowing -0.164 EPA per Play on plays with an Unexpected Rate of 0.182 or higher, while Success Rate allowed falls from 44.1% to 40.3%. So, on a per play basis, while there might not be any meaningful difference between having 0 and 1 players doing the opposite of what our model predicts, crossing the two-player threshold brings a more than 8.5x improvement in EPA allowed per play for the defense, and an almost 10% relative decrease in the rate of plays where offenses produce a positive EPA.
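This threshold comparison can be sketched as a simple filter-and-aggregate over a play-level table, assuming hypothetical `unx_rate` and `epa` columns:

```python
import pandas as pd

def threshold_summary(plays: pd.DataFrame) -> pd.DataFrame:
    """Compare EPA per play and success rate for plays with an
    Unexpected Rate of 0 versus at least 0.182 (2+ unexpected players)."""
    plays = plays.assign(success=plays["epa"] > 0)
    groups = {
        "unx_rate = 0": plays[plays["unx_rate"] == 0],
        "unx_rate >= 0.182": plays[plays["unx_rate"] >= 0.182],
    }
    return pd.DataFrame({
        name: {"epa_per_play": g["epa"].mean(),
               "success_rate": g["success"].mean(),
               "plays": len(g)}
        for name, g in groups.items()
    }).T
```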

Intuitively, this makes sense. The better job the defense can do at varying the scheme and maintaining a level of unpredictability and confusion, the better off they're likely to be. But are these differences actually significant? We can test this with a one-sided hypothesis test to determine the statistical significance of these results.

First, to implement our test, we will start by building a sampling distribution of EPA per Play. Our population is every non-QB-kneel, non-QB-spike dropback from weeks 1-9 of the 2022 NFL season, which comes out to 9210 total plays. We then take a random sample of 1300 plays from that population and calculate the EPA per Play across the sample. While a common rule of thumb for this kind of analysis ties the sample size to roughly 10% of the population, we will use 1300 here because the sample we want to test, the plays with an Unexpected Rate of at least 0.182, contains 1316 plays. By building a sampling distribution with the same, or at least a similar, sample size as the sample we're trying to test, we can draw meaningful conclusions about a sample of that particular size.

We then repeat the process of taking a random sample of 1300 plays and calculating the EPA per Play until we have 500,000 samples. From here, we can calculate the probability p that the EPA of our test sample, which was -0.164, would occur randomly if there were no other contributing factors. After completing that calculation, we get a p-value of 0.00088:

This means that there is a less than 0.1% chance that a random sample of 1300 plays would have an EPA per Play of -0.164 or lower. Considering anything lower than 0.05, or 5%, is typically considered significant, this is a strongly significant result, and it tells us there is real meaning to the relationship we saw in the earlier graphs between an increased Unexpected Rate and a decrease in EPA per Play.
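The sampling procedure above can be sketched as follows; the defaults mirror the full run (samples of 1300 from the 9210-play population, 500,000 iterations), though the function is generic:

```python
import numpy as np

def sampling_pvalue(epa: np.ndarray, observed: float,
                    sample_size: int = 1300, n_samples: int = 500_000,
                    seed: int = 0) -> float:
    """Build a sampling distribution of EPA per play from repeated random
    samples of the population, then return the one-sided probability of a
    sample mean at or below `observed`."""
    rng = np.random.default_rng(seed)
    means = np.empty(n_samples)
    for i in range(n_samples):
        # Sample plays without replacement and record the sample mean.
        sample = rng.choice(epa, size=sample_size, replace=False)
        means[i] = sample.mean()
    return float((means <= observed).mean())
```

The same function tests the Success Rate result by passing the per-play success indicator instead of EPA.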


We can repeat this same process to test the significance of our Success Rate results, and in doing so, we see that we get a p-value of 0.0033:

This result means that there is a roughly 0.33% chance - one third of one percent - that a random sample of 1300 plays would have a Success Rate of 0.403 or lower.


So, now that we know there is a statistically significant relationship between the number of players who contradict what the pass rush model predicts and the offense's ability to perform, which teams were able to leverage this schematic strategy the best? Below is a table showing how often each team had at least one player with a positive Unexpected Index (Unexpected Play Rate); their EPA per Play and Success Rate allowed on plays where the Unexpected Rate was zero; their Unexpected Rate, EPA per Play, and Success Rate on plays where the Unexpected Rate was not zero; and how their EPA per Play and Success Rate changed. As a reminder, since we are now viewing this at the team level rather than the play level, the Unexpected Rate can be interpreted as the average percentage of players with a positive Unexpected Index per play. Also, note that negative EPA values and lower Success Rates are better for the defense.

Rate of Unexpected Plays and EPA per Play
Weeks 1-9, all dropbacks, excluding QB Kneels or QB Spikes
Team | Total Plays | Unexpected Play Rate | Overall EPA per Play | Overall Success Rate | Plays (Unx Rate = 0) | EPA per Play (Unx Rate = 0) | Success Rate (Unx Rate = 0) | Plays (Unx Rate > 0) | Unexpected Rate (Unx Rate > 0) | EPA per Play (Unx Rate > 0) | Success Rate (Unx Rate > 0) | EPA Diff | Success Diff
DEN 292 0.572 -0.264 0.384 125 -0.238 0.408 167 0.125 -0.284 0.365 -0.046 -0.043
ARI 321 0.523 0.026 0.517 153 0.153 0.582 168 0.151 -0.089 0.458 -0.242 -0.124
ATL 352 0.509 0.117 0.526 173 0.092 0.497 179 0.146 0.141 0.553 0.049 0.056
MIA 314 0.503 0.040 0.459 156 -0.013 0.423 158 0.139 0.092 0.494 0.105 0.071
TB 312 0.503 -0.130 0.401 155 -0.043 0.406 157 0.153 -0.215 0.395 -0.172 -0.011
NYG 260 0.492 -0.052 0.423 132 -0.093 0.417 128 0.169 -0.010 0.430 0.083 0.013
CAR 290 0.490 0.077 0.466 148 0.177 0.459 142 0.154 -0.027 0.472 -0.204 0.013
NE 321 0.474 -0.221 0.393 169 -0.362 0.349 152 0.130 -0.065 0.441 0.297 0.092
LA 260 0.458 -0.002 0.450 141 0.138 0.461 119 0.128 -0.167 0.437 -0.305 -0.024
CIN 282 0.443 -0.084 0.411 157 0.005 0.427 125 0.140 -0.196 0.392 -0.201 -0.035
BAL 343 0.426 -0.015 0.446 197 -0.087 0.426 146 0.162 0.083 0.473 0.170 0.047
WAS 298 0.426 -0.002 0.423 171 0.211 0.491 127 0.123 -0.288 0.331 -0.499 -0.160
DET 273 0.425 0.118 0.473 157 0.159 0.503 116 0.146 0.062 0.431 -0.097 -0.072
JAX 308 0.422 -0.053 0.419 178 -0.026 0.433 130 0.143 -0.091 0.400 -0.065 -0.033
LAC 267 0.419 0.024 0.438 155 -0.030 0.400 112 0.119 0.099 0.491 0.129 0.091
DAL 259 0.413 -0.190 0.405 152 -0.178 0.454 107 0.132 -0.206 0.336 -0.028 -0.118
PIT 289 0.401 0.074 0.436 173 0.028 0.428 116 0.121 0.141 0.448 0.113 0.020
LV 278 0.367 0.212 0.518 176 0.122 0.517 102 0.132 0.367 0.520 0.245 0.003
KC 312 0.365 0.044 0.471 198 0.170 0.520 114 0.167 -0.176 0.386 -0.346 -0.134
SEA 307 0.358 0.006 0.414 197 -0.035 0.411 110 0.125 0.078 0.418 0.113 0.007
HOU 241 0.357 0.063 0.432 155 0.037 0.406 86 0.114 0.109 0.477 0.072 0.071
CLE 250 0.336 -0.078 0.400 166 -0.018 0.410 84 0.133 -0.196 0.381 -0.178 -0.029
NO 293 0.304 -0.002 0.410 204 -0.086 0.387 89 0.144 0.192 0.461 0.278 0.074
SF 230 0.304 -0.017 0.417 160 0.023 0.425 70 0.118 -0.109 0.400 -0.132 -0.025
GB 238 0.303 -0.080 0.408 166 -0.142 0.416 72 0.114 0.062 0.389 0.204 -0.027
CHI 249 0.265 0.037 0.490 183 0.032 0.492 66 0.145 0.052 0.485 0.020 -0.007
TEN 330 0.261 -0.051 0.448 244 0.007 0.443 86 0.142 -0.214 0.465 -0.221 0.022
BUF 265 0.238 -0.130 0.438 202 -0.194 0.436 63 0.128 0.074 0.444 0.268 0.008
PHI 294 0.235 -0.245 0.395 225 -0.131 0.422 69 0.116 -0.614 0.304 -0.483 -0.118
MIN 284 0.229 -0.052 0.433 219 -0.040 0.434 65 0.126 -0.089 0.431 -0.049 -0.003
NYJ 324 0.222 -0.216 0.389 252 -0.181 0.393 72 0.111 -0.337 0.375 -0.156 -0.018
IND 274 0.190 -0.010 0.449 222 0.042 0.464 52 0.108 -0.233 0.385 -0.275 -0.079


Limitations and Looking Forward


What we have shown here is that Unexpected Rate, at least based on the sample sizes available to us, is a threshold stat. That is, instead of seeing a consistent decrease in EPA as Unexpected Rate goes up, we need to reach a threshold, in this instance an Unexpected Rate of 0.182, before Unexpected Rate becomes significant. Our initial graphs did hint at a consistent decrease in EPA as Unexpected Rate goes up, but the sample sizes available to us were too small, so we will stick with the simple threshold for now. And while we were able to show that there is a meaningful relationship between a play's Unexpected Rate and the EPA allowed on that play, that doesn't mean we've answered all the possible questions we can ask about this subject.

First, from a model building perspective, there are a lot of factors that we are choosing to not explicitly account for. For example, we don’t account for the positioning of the other 10 players on the defense, or the locations of the 11 players on offense, or the offensive and defensive personnel packages, all of which can heavily influence the actions of a specific player on a particular play. That being said, the interaction between one player and the other 21 players on the field at a given time is somewhat built into the model already - the individual defensive player’s alignment and how they move through space and time on the football field is heavily influenced and dictated by where the offensive players are positioned and where the other defensive players are around him. A cornerback doesn’t align 12 yards outside the ball and 4 yards off the line of scrimmage for no reason - they’re there because that’s where the receiver they’re covering lined up, and the defensive play call dictates that they play off coverage. The same can be said of every player on the defense.

We also don't account for, or attempt to predict, how the pass coverage scheme the defense plays affects this. While our model is specifically designed to measure unpredictability within the pass rush, it would be naive to pretend that the pass rush and the coverage scheme are independent of each other - they work hand in hand on every play. To a certain extent, we do model pass coverage - any player who is expected to rush but drops into coverage is a factor in our model - but we are not explicitly modeling it.

Redoing this process from a pass coverage perspective could take a lot of different forms. We could start with a simple man versus zone prediction, but the final form would be predicting a specific coverage scheme - Cover 1, Quarters, etc. - and comparing that to the actual coverage played: seeing how often teams show disguised coverages, and then seeing which shown-versus-played combinations, e.g. showing Cover 2 but playing Cover 0, result in the most drastic effects for the offense.



Citations


Michael Lopez, Thompson Bliss, Ally Blake, Paul Mooney, and Addison Howard. NFL Big Data Bowl 2025. https://kaggle.com/competitions/nfl-big-data-bowl-2025, 2024. Kaggle.